Generalist models, which can perform diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Although a promising route toward general-purpose AI, existing generalist models are still at an early stage, with limited modality and task coverage. To enable multi-modal task scaling and speed up this line of research, we release OFASys, a generalist model learning system built on a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference, and it also facilitates multi-task training over diverse multi-modal workloads. As a starting point, we provide presets for 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop OFA+, a first-of-its-kind single model that can handle text, image, speech, video, and motion data. The single OFA+ model achieves 95% of the performance of 15 task-finetuned models on average with only 16% of their parameters, showcasing the reliability of the multi-modal task scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
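To make the declarative style concrete, below is a minimal, hypothetical sketch of how a one-line multi-modal instruction could be declared and parsed into modality-typed slots. The slot syntax mirrors the style described in the abstract, but the `Slot` class and `parse_instruction` helper are illustrative assumptions, not the actual OFASys API.

```python
# A minimal, hypothetical sketch of the "multi-modal instruction" idea:
# a task is declared as a single template string that names each slot's
# modality, and the system derives the training/inference plan from it.
# The Slot/parse helpers below are illustrative, not the OFASys API.
from dataclasses import dataclass


@dataclass
class Slot:
    modality: str    # e.g. "IMAGE", "TEXT", "AUDIO"
    name: str        # field name in the training example
    is_target: bool  # slots after "->" are generation targets


def parse_instruction(instruction: str) -> list[Slot]:
    """Split a one-line instruction into source and target slots."""
    source_part, target_part = instruction.split("->")
    slots = []
    for is_target, part in ((False, source_part), (True, target_part)):
        for token in part.split():
            if token.startswith("[") and token.endswith("]"):
                modality, name = token[1:-1].split(":")
                slots.append(Slot(modality, name, is_target))
    return slots


# Declaring an image-captioning task in one line:
caption_task = "[IMAGE:img] what does the image describe? -> [TEXT:cap]"
for slot in parse_instruction(caption_task):
    print(slot)
```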
Multi-modal image-text models such as CLIP and LiT have demonstrated impressive performance on image classification benchmarks, and their zero-shot generalization ability is particularly exciting. While the top-5 zero-shot accuracies of these models are very high, the top-1 accuracies are much lower (over a 25% gap in some cases). We investigate the reasons for this performance gap and find that many of the failure cases are caused by ambiguity in the text prompts. First, we develop a simple and efficient zero-shot post-hoc method to identify images whose top-1 prediction is likely to be incorrect, by measuring the consistency of the predictions w.r.t. multiple prompts and image transformations. We show that our procedure better predicts mistakes, outperforming the popular max-logit baseline on selective prediction tasks. Next, we propose a simple and efficient way to improve accuracy on such uncertain images by making use of the WordNet hierarchy; specifically, we augment the original class with its parent and children from the semantic label hierarchy and plug the augmentation into the text prompts. We conduct experiments on both CLIP and LiT models with five different ImageNet-based datasets. For CLIP, our method improves the top-1 accuracy by 17.13% on the uncertain subset and 3.6% on the entire ImageNet validation set. We also show that our method improves across ImageNet-shifted datasets and other model architectures such as LiT. Our proposed method is hyperparameter-free, requires no additional model training, and can be easily scaled to other large multi-modal architectures.
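A minimal, model-agnostic sketch of the consistency check described above: an image is flagged as uncertain when its top-1 class disagrees across prompt templates and simple image transformations. The `classify` callable stands in for any CLIP-like zero-shot classifier, and the agreement threshold and templates are illustrative assumptions.

```python
# A minimal, model-agnostic sketch of the prediction-consistency check:
# an image's top-1 prediction is treated as unreliable when too few
# (prompt, transform) pairs agree on the same class. The `classify`
# callable is a placeholder for a CLIP-like zero-shot classifier.
from collections import Counter
from typing import Callable, Sequence


def is_uncertain(
    image,
    classify: Callable[[object, str], int],  # (image, prompt_template) -> class index
    transforms: Sequence[Callable[[object], object]],
    prompt_templates: Sequence[str],
    agreement_threshold: float = 0.8,         # illustrative, not from the paper
) -> bool:
    """Flag the image if too few (prompt, transform) pairs agree on the top-1 class."""
    votes = Counter()
    total = 0
    for transform in transforms:
        view = transform(image)
        for template in prompt_templates:
            votes[classify(view, template)] += 1
            total += 1
    _, top_votes = votes.most_common(1)[0]
    return top_votes / total < agreement_threshold


# Toy usage with a classifier that always predicts class 0:
templates = ["a photo of a {}.", "a blurry photo of a {}."]
transforms = [lambda x: x, lambda x: x]  # identity stand-ins for real augmentations
print(is_uncertain("img", lambda v, t: 0, transforms, templates))  # False: full agreement
```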
Fairness has become a trending topic in natural language processing (NLP), addressing biases targeting certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing global discrimination problem, remains largely unexplored. This paper bridges the gap by analysing the regional bias learned by the pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) that utilises information from the sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with respect to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our codes are available at https://github.com/Bernard-Yang/HERB.
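A hedged sketch of hierarchical aggregation over sub-region clusters, in the spirit of the description above: per-region bias scores are first pooled within each geographical cluster, then cluster-level scores are combined into a single number. The dispersion-based pooling and size weighting below are illustrative assumptions, not HERB's exact formula.

```python
# Hypothetical hierarchical aggregation: region -> cluster -> overall score.
# The pooling rule (within-cluster standard deviation, size-weighted mean)
# is an illustrative assumption rather than the paper's metric.
import statistics


def hierarchical_bias(region_scores: dict[str, float],
                      clusters: dict[str, list[str]]) -> float:
    """Aggregate region-level bias scores through a one-level cluster hierarchy."""
    cluster_scores = []
    for regions in clusters.values():
        scores = [region_scores[r] for r in regions]
        # Within a cluster, treat the spread of scores as the cluster's bias signal.
        spread = statistics.pstdev(scores) if len(scores) > 1 else 0.0
        cluster_scores.append((len(scores), spread))
    total = sum(n for n, _ in cluster_scores)
    # Weight clusters by how many regions they contain.
    return sum(n * s for n, s in cluster_scores) / total


scores = {"A": 0.10, "B": 0.30, "C": 0.12, "D": 0.50}
clusters = {"cluster-1": ["A", "C"], "cluster-2": ["B", "D"]}
print(hierarchical_bias(scores, clusters))
```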
Restricted by the inherent ambiguity of depth perception, modern camera-based 3D object detection methods fall into a performance bottleneck. Intuitively, leveraging temporal multi-view stereo (MVS) techniques is a natural way to resolve this ambiguity. However, traditional attempts at MVS are flawed in two aspects when applied to 3D object detection scenarios: 1) the affinity measurement among all views suffers from expensive computational cost; 2) it is difficult to handle outdoor scenes where objects are often moving. To this end, we introduce an effective temporal stereo method that dynamically selects the scale of matching candidates, significantly reducing computational overhead. Going one step further, we design an iterative algorithm to update the more valuable candidates, making it adaptive to moving objects. We instantiate our proposed method into a multi-view 3D detector, namely BEVStereo. BEVStereo achieves new state-of-the-art performance (i.e., 52.5% mAP and 61.0% NDS) on the camera-only track of the nuScenes dataset. Meanwhile, extensive experiments show that our method handles complex outdoor scenes better than contemporary MVS approaches. The code has been released at https://github.com/Megvii-BaseDetection/BEVStereo.
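A simplified, heavily hedged sketch of the iterative candidate-update idea described above: depth hypotheses are scored by a matching cost, and each round re-centers and shrinks the search range around the best hypothesis, so far fewer candidates are needed overall. The cost function and shrinking schedule are placeholders, not BEVStereo's actual formulation.

```python
# Illustrative iterative narrowing of a 1-D depth search range; the matching
# cost and schedule are stand-ins for a real temporal stereo cost volume.
import numpy as np


def refine_depth(match_cost, d_min=2.0, d_max=60.0, num_candidates=8, rounds=3):
    """Iteratively narrow a depth search range using a per-pixel cost function."""
    lo, hi = d_min, d_max
    best = None
    for _ in range(rounds):
        candidates = np.linspace(lo, hi, num_candidates)
        costs = np.array([match_cost(d) for d in candidates])
        best = candidates[int(np.argmin(costs))]
        span = (hi - lo) / 4.0  # shrink the range around the current best candidate
        lo, hi = max(d_min, best - span), min(d_max, best + span)
    return best


# Toy cost with a minimum near 23 m stands in for a real stereo matching cost.
print(refine_depth(lambda d: (d - 23.0) ** 2))
```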
Learning accurate depth is essential for multi-view 3D object detection. Recent approaches mainly learn depth from monocular images, which face inherent difficulties due to the ill-posed nature of monocular depth learning. In this work, instead of using a sole monocular depth method, we propose a novel Surround-view Temporal Stereo (STS) technique that leverages the geometric correspondence between frames across time to facilitate accurate depth learning. Specifically, we regard the fields of view of all cameras around the ego vehicle as a unified view, namely the surround view, and perform temporal stereo matching on it. The geometric correspondence between different frames from STS is utilized and combined with monocular depth to yield the final depth prediction. Comprehensive experiments on nuScenes show that STS greatly boosts 3D detection ability, notably for medium- and long-distance objects. On BEVDepth with a ResNet-50 backbone, STS improves mAP and NDS by 2.6% and 1.4%, respectively. Consistent improvements are observed when using a larger backbone and larger image resolution, demonstrating its effectiveness.
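A minimal sketch, under stated assumptions, of fusing a monocular depth distribution with a temporal-stereo matching distribution over shared depth bins: the two per-pixel categorical distributions are multiplied and renormalized. The abstract does not spell out the fusion rule, so this product-of-experts form is purely illustrative.

```python
# Illustrative per-pixel fusion of monocular and stereo depth distributions
# over D shared depth bins; the product-and-renormalize rule is an assumption.
import numpy as np


def fuse_depth(mono_probs: np.ndarray, stereo_probs: np.ndarray) -> np.ndarray:
    """Fuse two (H, W, D) per-pixel depth distributions over D depth bins."""
    fused = mono_probs * stereo_probs
    return fused / fused.sum(axis=-1, keepdims=True)


mono = np.full((1, 1, 4), 0.25)              # uninformative monocular guess
stereo = np.array([[[0.1, 0.6, 0.2, 0.1]]])  # stereo matching is more peaked
print(fuse_depth(mono, stereo))              # the fused result follows the stereo evidence
```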
When modeling programming languages with deep learning techniques, neural networks with tree or graph structures are widely adopted to capture the rich structural information in a program's abstract syntax tree (AST). However, long-range/global dependencies widely exist in programs, and most of these neural architectures fail to capture them. In this paper, we propose Tree-Transformer, a novel recursive tree-structured neural network designed to overcome the above limitations. Tree-Transformer leverages two multi-head attention units to model the dependencies between sibling node pairs and between parent-child node pairs. Moreover, we propose a bidirectional propagation strategy that allows node information to pass in two directions along the tree: bottom-up and top-down. By combining bottom-up and top-down propagation, Tree-Transformer can learn both global context and meaningful node features simultaneously. Extensive experimental results show that Tree-Transformer outperforms existing tree-based and graph-based neural networks on program-related tasks with both tree-level and node-level prediction, indicating that Tree-Transformer performs well at learning both tree-level and node-level representations.
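A compact sketch of the bottom-up pass described above: each parent node attends over its children's states with a multi-head attention unit, and the result updates the parent. The dimensions, the tree encoding, and the use of a single shared attention module are simplifying assumptions; the paper also includes a sibling attention unit and a symmetric top-down pass, which are omitted here.

```python
# Illustrative bottom-up parent-child attention step over a tiny AST; this is
# a simplified stand-in for Tree-Transformer's recursive propagation.
import torch
import torch.nn as nn


class BottomUpStep(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, states: torch.Tensor, children: dict[int, list[int]]) -> torch.Tensor:
        """states: (num_nodes, dim); children maps a node index to its child indices."""
        updated = states.clone()
        for parent, kids in children.items():
            if not kids:
                continue
            query = states[parent].view(1, 1, -1)  # the parent queries...
            kv = states[kids].unsqueeze(0)         # ...its children's states
            out, _ = self.attn(query, kv, kv)
            updated[parent] = out.squeeze()
        return updated


# Tiny AST: node 0 is the root with children 1 and 2.
states = torch.randn(3, 64)
print(BottomUpStep()(states, {0: [1, 2]}).shape)  # torch.Size([3, 64])
```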
Recently, many automatic white blood cell (WBC), or leukocyte, classification techniques have been developed. However, all of these methods only utilize microscopic images of a single modality, i.e., either blood-smear or fluorescence based, and thus miss the potential of learning better from multi-modal images. In this work, we develop an efficient multi-modal architecture based on the first multi-modal WBC dataset for the WBC classification task. Specifically, our proposed idea is developed in two steps: 1) first, we learn modality-specific independent subnetworks inside a single network only; 2) we then further enhance the learning capability of the independent subnetworks by distilling knowledge from high-complexity independent teacher networks. Consequently, our proposed framework achieves high performance while keeping the complexity of the multi-modal setup low. Our unique contribution is two-fold: 1) we present the first-of-its-kind multi-modal WBC dataset for WBC classification; 2) we develop a high-performing multi-modal architecture that is also efficient and of low complexity.
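A minimal sketch of the second step described above (distilling knowledge from a high-capacity teacher into a lightweight modality-specific student subnetwork), using the standard temperature-scaled KL objective. The temperature and loss weighting are illustrative assumptions, not the paper's settings.

```python
# Standard knowledge-distillation loss: soft-label KL against the teacher,
# blended with hard-label cross-entropy. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7) -> torch.Tensor:
    """Blend hard-label cross-entropy with soft-label KL against the teacher."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce


student = torch.randn(8, 5)            # e.g. 8 blood-smear images, 5 WBC classes
teacher = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, labels))
```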
In this paper, we present OpenMedIA, an open-source toolbox library containing a rich set of deep learning methods for medical image analysis under heterogeneous Artificial Intelligence (AI) computing platforms. Various medical image analysis methods, including 2D/3D medical image classification, segmentation, localization, and detection, have been included in the toolbox with PyTorch and/or MindSpore implementations under heterogeneous NVIDIA and Huawei Ascend computing systems. To the best of our knowledge, OpenMedIA is the first open-source algorithm library to provide both PyTorch and MindSpore implementations.
In this paper, we address the problem of vision-based detection and tracking of multiple aerial vehicles using a single camera and an Inertial Measurement Unit (IMU), as well as the corresponding perception consensus problem (i.e., uniqueness and identical IDs across all observers). We design several vision-based decentralized Bayesian multi-tracking filtering strategies to resolve the association between the incoming unsorted measurements obtained by a visual detector algorithm and the tracked agents. We compare their accuracy in different operating conditions as well as their scalability with respect to the number of agents in the team. This analysis provides useful insights into the most appropriate design choices for a given task. We further show that the proposed perception and inference pipeline, which includes a Deep Neural Network (DNN) as the visual target detector, is lightweight and capable of running on board Size, Weight, and Power (SWaP) constrained robots concurrently with control and planning. Experimental results show effective tracking of multiple aerial vehicles in various challenging scenarios, such as heavy occlusions.
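A small sketch of the measurement-to-track association step mentioned above: unsorted detections are assigned to existing tracks by minimizing total distance with the Hungarian algorithm, gated by a maximum match distance. This global-nearest-neighbour scheme is one of the simplest strategies such Bayesian multi-tracking filters can build on; it is an illustrative stand-in, not the authors' exact filter.

```python
# Illustrative global-nearest-neighbour association between predicted track
# positions and unsorted detections, with a simple distance gate.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(tracks: np.ndarray, detections: np.ndarray, gate: float = 2.0):
    """tracks: (T, 2) predicted positions; detections: (D, 2) measurements."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only pairs whose distance is within the validation gate.
    return [(int(t), int(d)) for t, d in zip(rows, cols) if cost[t, d] <= gate]


tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
detections = np.array([[4.8, 5.1], [0.2, -0.1], [9.0, 9.0]])
print(associate(tracks, detections))  # [(0, 1), (1, 0)]
```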
This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods have proposed several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations and neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called Self-challenging Fisher Discriminant Analysis (SFDA), which has attractive benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate hard examples. Third, SFDA can easily select multiple pre-trained models for model ensembles. Extensive experiments on 33 pre-trained models across 11 downstream tasks show that SFDA is efficient, effective, and robust in measuring the transferability of pre-trained models. For example, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average gain of 59.1% while bringing a 22.5x speedup in wall-clock time. The code will be available at https://github.com/tencentarc/sfda.
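A rough sketch of the Fisher-space idea summarized above: features extracted by a frozen pre-trained model on the target data are projected with Linear Discriminant Analysis, and class separability in that space is used to rank models (a higher score suggesting higher transferability). The self-challenging re-weighting of hard examples is omitted, and the use of sklearn's LDA with the mean posterior of the true class is an illustrative simplification rather than the paper's exact metric.

```python
# Illustrative Fisher-discriminant transferability score for one pre-trained
# model, computed from its frozen features on the target dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def fisher_transferability(features: np.ndarray, labels: np.ndarray) -> float:
    """Score one pre-trained model from its (N, D) target-set features."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(features, labels)
    probs = lda.predict_proba(features)
    # Average posterior assigned to the correct class: a class-separability proxy.
    return float(probs[np.arange(len(labels)), labels].mean())


rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
labels = np.array([0] * 50 + [1] * 50)
print(fisher_transferability(feats, labels))  # close to 1.0 for well-separated classes
```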